
    Computation of Smooth Optical Flow in a Feedback Connected Analog Network

    In 1986, Tanner and Mead \cite{Tanner_Mead86} implemented an interesting constraint satisfaction circuit for global motion sensing in aVLSI. We report here a new and improved aVLSI implementation that provides smooth optical flow as well as global motion in a two-dimensional visual field. The computation of optical flow is an ill-posed problem, which expresses itself as the aperture problem. However, the optical flow can be estimated by the use of regularization methods, in which additional constraints are introduced in terms of a global energy functional that must be minimized. We show how the algorithmic constraints of Horn and Schunck \cite{Horn_Schunck81} on computing smooth optical flow can be mapped onto the physical constraints of an equivalent electronic network.
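The regularization scheme referred to above can be sketched in software. The following is the standard textbook form of the Horn and Schunck iteration, not the feedback network circuit described in the paper; the frame size and the smoothness weight `alpha` are illustrative choices:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Estimate dense optical flow (u, v) between frames I1 and I2 by
    minimizing the Horn-Schunck energy: the data term (Ix*u + Iy*v + It)^2
    plus alpha^2 times a smoothness penalty on the flow field."""
    I1, I2 = I1.astype(float), I2.astype(float)
    Ix = np.gradient(I1, axis=1)   # spatial derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def local_avg(f):              # 4-neighbour average (periodic borders)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        u_bar, v_bar = local_avg(u), local_avg(v)
        # Jacobi update derived from the Euler-Lagrange equations
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v

# Demo: a smooth pattern translated by one pixel in +x
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
frame1 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
frame2 = np.roll(frame1, 1, axis=1)
u, v = horn_schunck(frame1, frame2)
```

Because a uniform translation satisfies the smoothness term exactly, the recovered flow field approaches the true one-pixel horizontal shift as the iteration proceeds.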

    Neuromorphic analogue VLSI

    Neuromorphic systems emulate the organization and function of nervous systems. They are usually composed of analogue electronic circuits that are fabricated in the complementary metal-oxide-semiconductor (CMOS) medium using very large-scale integration (VLSI) technology. However, these neuromorphic systems are not another kind of digital computer in which abstract neural networks are simulated symbolically in terms of their mathematical behavior. Instead, they directly embody, in the physics of their CMOS circuits, analogues of the physical processes that underlie the computations of neural systems. The significance of neuromorphic systems is that they offer a method of exploring neural computation in a medium whose physical behavior is analogous to that of biological nervous systems and that operates in real time irrespective of size. The implications of this approach are both scientific and practical. The study of neuromorphic systems provides a bridge between levels of understanding. For example, it provides a link between the physical processes of neurons and their computational significance. In addition, the synthesis of neuromorphic systems transposes our knowledge of neuroscience into practical devices that can interact directly with the real world in the same way that biological nervous systems do.

    Adaptive Neural Coding Dependent on the Time-Varying Statistics of the Somatic Input Current

    It is generally assumed that nerve cells optimize their performance to reflect the statistics of their input. Electronic circuit analogs of neurons require similar methods of self-optimization for stable and autonomous operation. We here describe and demonstrate a biologically plausible adaptive algorithm that enables a neuron to adapt the current threshold and the slope (or gain) of its current-frequency relationship to match the mean (or dc offset) and variance (or dynamic range or contrast) of the time-varying somatic input current. The adaptation algorithm estimates the somatic current signal from the spike train by way of the intracellular somatic calcium concentration, thereby continuously adjusting the neuron's firing dynamics. This principle is shown to work in an analog VLSI-designed silicon neuron.
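The adaptation loop described above can be caricatured with a threshold-linear rate model: a slow calcium-like variable estimates the mean and variance of the output rate, and the threshold and gain are nudged until the f-I curve matches the input statistics. All constants below are invented for the illustration and are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative targets and rates (not taken from the paper)
target_rate, target_std = 1.0, 1.0   # desired mean and contrast of the output
eta = 0.002                          # slow adaptation rate
mean_I, std_I = 2.0, 0.5             # statistics of the somatic input current

theta, gain = 0.0, 1.0               # current threshold and f-I slope
ca = target_rate                     # calcium-like running estimate of the rate
ca_var = target_std ** 2             # running estimate of the rate variance

for _ in range(200_000):
    I = rng.normal(mean_I, std_I)             # time-varying somatic current
    f = gain * max(I - theta, 0.0)            # threshold-linear f-I curve
    ca += 0.01 * (f - ca)                     # calcium integrates the spike train
    ca_var += 0.01 * ((f - ca) ** 2 - ca_var)
    theta += eta * (ca - target_rate)         # threshold tracks the dc offset
    gain += eta * (target_std - np.sqrt(ca_var))  # gain tracks the contrast
    gain = max(gain, 1e-3)

# At equilibrium the calcium estimates match the targets, so the output rate
# has the desired mean and contrast regardless of the input's offset and variance.
```

The two integral-control loops are much slower than the calcium estimator, so the system settles quasi-statically; for this input (mean 2.0, std 0.5) the gain rises above its initial value of 1 and the threshold moves up toward the input mean.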

    Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks

    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of non-saturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by non-linear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation, and also offer insight into the computational role of dual inhibitory mechanisms in neural circuits. (Accepted manuscript, in press, Neural Computation, 2018.)
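As a toy illustration of this scheme (all gains invented for the example, not taken from the paper), coupled WTA modules of non-saturating linear-threshold units can color a triangle graph: each node hosts a WTA over three colors, and constraint connections inhibit the same color on adjacent nodes:

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # triangle: adjacent nodes need distinct colors
n_nodes, n_colors = 3, 3
alpha, beta, gamma, dt = 1.2, 1.0, 1.5, 0.1   # illustrative gains

def solve(seed, steps=2000):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 0.1, size=(n_nodes, n_colors))  # non-negative rates
    for _ in range(steps):
        wta_inh = x.sum(axis=1, keepdims=True) - x   # competition within a node
        cst_inh = np.zeros_like(x)                   # constraint neurons couple
        for i, j in edges:                           # same colors across an edge
            cst_inh[i] += x[j]
            cst_inh[j] += x[i]
        drive = 0.2 + alpha * x - beta * wta_inh - gamma * cst_inh
        x = np.maximum(x + dt * (drive - x) + 0.01 * rng.normal(size=x.shape), 0.0)
    return x.argmax(axis=1)

# Recurrent excitation (alpha > 1) makes the dynamics locally unstable, so the
# network explores; conflicting assignments suppress each other (gamma > alpha - 1),
# while a conflict-free configuration grows unopposed and wins.
for seed in range(10):
    coloring = solve(seed)
    if all(coloring[i] != coloring[j] for i, j in edges):
        break
```

Several random restarts are used here because symmetry breaking is noise-driven; the proofs in the paper concern the full construction, not this miniature.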

    Competition through selective inhibitory synchrony

    Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and to selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily, because their axons and dendrites are co-localized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this co-localization assumption is not valid. In this paper we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron, and even across different cortical areas. We prove by non-linear contraction analysis, and demonstrate by simulation, that distributed WTA sub-systems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other. (In press at Neural Computation; 4 figures.)

    Collective stability of networks of winner-take-all circuits

    The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But, these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support the sophisticated computations, while maintaining overall circuit stability. We consider the question of how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason over the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large stable networks. We use nonlinear Contraction Theory to establish conditions for stability in the fully nonlinear case, and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multi-stable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition. (7 figures.)
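The high-gain regime discussed above can be illustrated with a minimal rate-based soft WTA (the gains `a` and `b` are chosen for the example, not derived from the paper's bounds): self-excitation a > 1 makes the differential modes unstable so a winner is selected, while shared inhibition b keeps the aggregate activity bounded.

```python
import numpy as np

def wta(inputs, a=1.2, b=1.0, dt=0.05, steps=4000):
    """Soft winner-take-all: recurrent self-excitation `a`, shared
    inhibition `b`, non-saturating rectification (illustrative values)."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        drive = inputs + a * x - b * x.sum()
        x = x + dt * (-x + np.maximum(drive, 0.0))
    return x

x = wta(np.array([1.0, 2.0, 3.0, 5.0, 4.0]))
# The unit with the largest input is selectively amplified to its fixed point
# I_max / (1 - a + b) = 5 / 0.8 = 6.25; all competitors are fully suppressed.
```

At the winner's fixed point the loop gain 1 - a + b = 0.8 is positive, so the winning mode is stable even though the symmetric network is unstable in its differential modes; this is the trade-off the abstract describes.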

    The California Bungalow and the Tyrolean Chalet: The Ill-Fated Life of an American Vernacular

    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/71993/1/j.1542-734X.1992.1504_1.x.pd

    Models of Group Productivity and Affiliation-Related Motives

    Using Steiner's (1971) model of group performance, an experiment was performed to investigate the relationship among affiliation-related motives, task demands, group size, and group productivity. It was suggested that affiliation-related motives may mediate the effects of group size and task demands on group productivity. That is, when task demands were disjunctive it was hypothesized that approval-oriented as compared to rejection-threatened persons would increase their productivity as group size increased. Under conjunctive task demands it was predicted that increased group size would lead to decreased productivity for rejection-threatened as compared to approval-oriented members.

    Affiliation-related motives were measured using Short's (1980) measure of resultant affiliation motivation. Subjects were classified as high, moderate, or low on this measure, and the experimental design included two levels of group size (i.e., two, six) and two levels of task demands (i.e., disjunctive, conjunctive). There was an equal number of groups in each cell of the design, and groups were homogeneous with respect to motive designation. As task demands were a within-subjects factor, the order in which subjects performed the tasks was counterbalanced.

    Results obtained from analysis of variance did not support the predicted three-way interaction among resultant affiliation motivation, group size, and task demands. However, results obtained using a measure designed to validate Steiner's model did yield the predicted three-way interaction.

    A second study was designed to examine the possible effects of overmotivation on the predicted three-way interaction. Contrary to an overmotivation explanation, results indicated that approval-oriented persons performing under disjunctive task demands did better when approval incentives in the group situation were high rather than low. Interestingly, the three-way interaction appeared to be due to the approval-oriented performing better than the rejection-threatened in the disjunctive task when anticipating future interaction, and better in the conjunctive task when future interaction was not anticipated.

    Results were interpreted as offering limited support for motivational predictions related to Steiner's model, but suggestive of a number of potentially fruitful avenues of research.

    A Framework for Modeling the Growth and Development of Neurons and Networks

    The development of neural tissue is a complex organizing process, in which it is difficult to grasp how the various localized interactions between dividing cells lead relentlessly to global network organization. Simulation is a useful tool for exploring such complex processes because it permits rigorous analysis of observed global behavior in terms of the mechanistic axioms declared in the simulated model. We describe a novel simulation tool, CX3D, for modeling the development of large realistic neural networks, such as the neocortex, in a physical 3D space. In CX3D, as in biology, neurons arise by the replication and migration of precursors, which mature into cells able to extend axons and dendrites. Individual neurons are discretized into spherical (for the soma) and cylindrical (for neurites) elements that have appropriate mechanical properties. The growth functions of each neuron are encapsulated in a set of pre-defined modules that are automatically distributed across its segments during growth. The extracellular space is also discretized, and allows for the diffusion of extracellular signaling molecules, as well as the physical interactions of the many developing neurons. We demonstrate the utility of CX3D by simulating three interesting developmental processes: neocortical lamination based on mechanical properties of tissues; a growth model of a neocortical pyramidal cell based on layer-specific guidance cues; and the formation of a neural network in vitro by employing neurite fasciculation. We also provide some examples in which previous models from the literature are re-implemented in CX3D. Our results suggest that CX3D is a powerful tool for understanding neural development.
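The discretization described above can be sketched in a few lines: a soma sphere is the root of a tree of cylindrical neurite elements, and a growth routine elongates each tip and occasionally bifurcates it. The class and parameter names below are illustrative and do not reproduce CX3D's actual API:

```python
import random
from dataclasses import dataclass, field

@dataclass
class NeuriteElement:
    """One discretized element: the soma sphere at the root, cylinders elsewhere."""
    position: tuple                                 # 3D coordinate of the distal end
    parent: "NeuriteElement | None" = None
    children: list = field(default_factory=list)

def elongate(tip, direction, step=1.0, jitter=0.2, rng=random):
    """Extend a neurite tip by one cylindrical element with directional noise."""
    d = tuple(c + rng.uniform(-jitter, jitter) for c in direction)
    pos = tuple(p + step * c for p, c in zip(tip.position, d))
    child = NeuriteElement(position=pos, parent=tip)
    tip.children.append(child)
    return child

rng = random.Random(0)
soma = NeuriteElement(position=(0.0, 0.0, 0.0))     # soma sphere at the origin
tips = [elongate(soma, (0, 0, 1), rng=rng)]

for _ in range(20):                 # growth loop: each tip elongates and may
    new_tips = []                   # bifurcate into two daughter branches
    for tip in tips:
        if rng.random() < 0.1:      # stochastic bifurcation
            new_tips.append(elongate(tip, (0.5, 0, 1), rng=rng))
            new_tips.append(elongate(tip, (-0.5, 0, 1), rng=rng))
        else:
            new_tips.append(elongate(tip, (0, 0, 1), rng=rng))
    tips = new_tips
```

In CX3D itself the growth modules also interact with the mechanics of neighboring cells and with diffusing signaling molecules in the discretized extracellular space; this sketch only shows the tree-of-elements representation.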

    Artificial Cognitive Systems: From VLSI Networks of Spiking Neurons to Neuromorphic Cognition

    Neuromorphic engineering (NE) is an emerging research field that has been attempting to identify the computational principles used by neural systems, by implementing biophysically realistic models of neural systems in Very Large Scale Integration (VLSI) technology. Remarkable progress has been made recently, and complex artificial neural sensory-motor systems can be built using this technology. Today, however, NE stands before a large conceptual challenge that must be met before there will be significant progress toward an age of genuinely intelligent neuromorphic machines. The challenge is to bridge the gap from reactive systems to ones that are cognitive in quality. In this paper, we describe recent advancements in NE, and present examples of neuromorphic circuits that can be used as tools to address this challenge. Specifically, we show how VLSI networks of spiking neurons with spike-based plasticity mechanisms and soft winner-take-all architectures represent important building blocks useful for implementing artificial neural systems able to exhibit basic cognitive abilities.
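A minimal pair-based example of the spike-based plasticity mentioned above, using a generic exponential STDP window (the amplitudes and time constant are invented for the illustration, not taken from the paper's circuits):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a pre/post spike pair, dt_ms = t_post - t_pre.
    Pre-before-post pairs potentiate; post-before-pre pairs depress,
    with an exponentially decaying window of width tau_ms."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)
```

With a_minus slightly larger than a_plus, uncorrelated pre/post activity produces net depression, a common choice for keeping such learning rules stable.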